Multimodal deep learning has been used to predict clinical endpoints and diagnoses from clinical routine data. However, these models suffer from scaling issues: they have to learn pairwise interactions between each piece of information in each data type, thereby escalating model complexity beyond manageable scales. This has so far precluded widespread use of multimodal deep learning. Here, we present a new technical approach, "learnable synergies", in which the model selects only the relevant interactions between data modalities and keeps an "internal memory" of relevant data. Our approach is easily scalable and naturally adapts to multimodal data inputs from clinical routine. We demonstrate this approach on three large multimodal datasets from radiology and ophthalmology and show that it outperforms state-of-the-art models in clinically relevant diagnosis tasks. Our new approach is transferable and will allow the application of multimodal deep learning to a broad set of clinically relevant problems.
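The abstract only names the core idea; the following is a minimal, hypothetical sketch (in PyTorch) of what a learned gate over pairwise cross-modal interactions could look like. The class name, the gating scheme, and the omission of the described "internal memory" are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class LearnedSynergyFusion(nn.Module):
    """Illustrative gate over pairwise cross-modal interactions (not the authors' code)."""

    def __init__(self, dim: int, n_modalities: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # One learnable relevance logit per ordered modality pair.
        self.gate_logits = nn.Parameter(torch.zeros(n_modalities, n_modalities))
        self.scale = dim ** -0.5

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_modalities, dim) -- one embedding per modality.
        q, k, v = self.q(feats), self.k(feats), self.v(feats)
        attn = torch.softmax((q @ k.transpose(-1, -2)) * self.scale, dim=-1)  # (B, M, M)
        gates = torch.sigmoid(self.gate_logits)  # (M, M) in [0, 1], shared across the batch
        attn = attn * gates                      # suppress interactions deemed irrelevant
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)  # renormalise rows
        return attn @ v                          # (B, M, dim) fused per-modality features


# Toy usage: three modalities (e.g. imaging, lab values, clinical text), 128-d embeddings.
fusion = LearnedSynergyFusion(dim=128, n_modalities=3)
fused = fusion(torch.randn(4, 3, 128))
```

Because the gates are parameters rather than fixed, irrelevant modality pairs can be down-weighted during training, which is one way the quadratic cost of modelling all pairwise interactions could be kept manageable.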
Generative models such as DALL-E 2 could become promising tools for image generation, augmentation, and manipulation in artificial intelligence research in radiology, provided these models possess sufficient medical domain knowledge. Here, we show that DALL-E 2 has learned relevant representations of X-ray images, with promising capabilities in zero-shot text-to-image generation, in continuing an image beyond its original boundaries, and in removing elements, although the generation of pathology and of CT, MRI, and ultrasound images remains limited. The use of generative models to augment and generate radiological data therefore appears feasible, even if further fine-tuning and adaptation of these models to the medical domain is required first.
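The capabilities described above (continuing an image beyond its borders, removing elements) correspond to mask-based inpainting/outpainting. Below is a hedged sketch of how such an edit could be driven programmatically via the OpenAI Images edit endpoint for DALL-E 2; the file names, prompt, and parameter values are illustrative assumptions, and the study may equally have used the DALL-E 2 web interface rather than the API.

```python
# Hypothetical sketch: mask-based outpainting of an X-ray with the OpenAI Images
# edit endpoint (DALL-E 2). The transparent region of the mask marks the area
# the model is asked to fill; file names and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("chest_xray.png", "rb") as image, open("outpaint_mask.png", "rb") as mask:
    response = client.images.edit(
        model="dall-e-2",
        image=image,
        mask=mask,
        prompt="A plain-film chest X-ray, continued beyond the original field of view",
        n=1,
        size="1024x1024",
    )

print(response.data[0].url)  # URL of the generated/extended image
```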